Pruning Deep Neural Network Models of Guitar Distortion Effects

Abstract

Deep neural networks have been successfully used for black-box modeling of analog audio effects such as distortion. Improving processing speed and memory requirements at the inference step is desirable, to allow the models to run on a wide range of hardware and concurrently with other software. In this paper, we propose a new application of recent advancements in neural network pruning methods to recurrent distortion effect models based on the Long Short-Term Memory architecture. We compare the efficacy of the method on four different datasets: one distortion pedal and three vacuum tube amplifiers. Iterative magnitude pruning allows us to remove over 99% of the parameters from some models without a loss of accuracy. We evaluate the real-time performance of the pruned models and find that a 3x-4x speedup can be achieved, compared to an unpruned baseline. We also show that training a larger model and then pruning it outperforms an unpruned model of equivalent hidden size. A listening test confirms that pruning does not degrade the perceived sound quality, but may even slightly improve it. The proposed techniques enable the design of computationally efficient deep neural network models of distortion effects for electric guitar, suitable for real-time operation.
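To illustrate the core idea, the following is a minimal NumPy-only sketch of iterative magnitude pruning (function names and the fine-tuning callback are hypothetical; the paper itself prunes LSTM weights during training, which is not reproduced here). Sparsity is raised gradually, and the surviving weights are fine-tuned between pruning rounds:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries until at least `sparsity` of them are zero."""
    flat = np.abs(weights).ravel()
    k = int(np.ceil(sparsity * flat.size))
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def iterative_magnitude_prune(weights, target_sparsity, steps, finetune):
    """Raise sparsity in `steps` increments, fine-tuning surviving weights in between."""
    w = weights.copy()
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps
        w, mask = magnitude_prune(w, sparsity)
        # Fine-tune, then re-apply the mask so pruned weights stay zero.
        w = finetune(w) * mask
    return w
```

In practice `finetune` would run a few epochs of gradient descent on the remaining weights; here an identity callback suffices to show the schedule.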


Similar Articles

Automated Pruning for Deep Neural Network Compression

In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is ...

Deep Neural Network Language Models

In recent years, neural network language models (NNLMs) have shown success in both perplexity and word error rate (WER) compared to conventional n-gram language models. Most NNLMs are trained with one hidden layer. Deep neural networks (DNNs) with more hidden layers have been shown to capture higher-level discriminative information about input features, and thus produce better networks. Motivate...

Structured Deep Neural Network Pruning via Matrix Pivoting

Deep Neural Networks (DNNs) are the key to the state-of-the-art machine vision, sensor fusion and audio/video signal processing. Unfortunately, their computation complexity and tight resource constraints on the Edge make them hard to leverage on mobile, embedded and IoT devices. Due to great diversity of Edge devices, DNN designers have to take into account the hardware platform and application...

A Deep Neural Network Compression Pipeline: Pruning, Quantization, Huffman Encoding

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce a three-stage pipeline: pruning, quantization, and Huffman encoding, which work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method...
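As a rough sketch of two of the pipeline's stages (illustrative only; the helper names are assumptions, and the original work clusters weights rather than quantizing them uniformly), surviving weights can be mapped to a small codebook and the codebook indices then assigned Huffman code lengths by frequency:

```python
import heapq
from collections import Counter
import numpy as np

def quantize(weights, n_levels=16):
    """Map each weight to the index of the nearest of n_levels uniform centroids."""
    centroids = np.linspace(weights.min(), weights.max(), n_levels)
    idx = np.argmin(np.abs(weights[..., None] - centroids), axis=-1)
    return idx, centroids

def huffman_code_lengths(symbols):
    """Build a Huffman tree over symbol frequencies; return {symbol: code length in bits}."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol still needs one bit
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreaker, {symbol: depth-so-far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Because quantization skews the index distribution toward a few frequent codewords, the variable-length Huffman codes store them in fewer bits than a fixed-width encoding would.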

Neural Network Pruning and Pruning Parameters

The default multilayer neural network topology is a fully interlayer-connected one. This simplistic choice facilitates the design but limits the performance of the resulting neural networks. The best-known methods for obtaining partially connected neural networks are the so-called pruning methods, which are used for optimizing both the size and the generalization capabilities of neural networ...

Journal

Journal title: IEEE/ACM Transactions on Audio, Speech, and Language Processing

Year: 2023

ISSN: 2329-9304, 2329-9290

DOI: https://doi.org/10.1109/taslp.2022.3223257